On minimax estimation of a sparse normal mean vector

Author

  • Iain M. Johnstone
Abstract

Mallows has conjectured that among distributions which are Gaussian but for occasional contamination by additive noise, the one having least Fisher information has (two-sided) geometric contamination. A very similar problem arises in estimation of a non-negative vector parameter in Gaussian white noise when it is known also that most, i.e. (1 − ε), components are zero. We provide a partial asymptotic expansion of the minimax risk as ε → 0. While the conjecture seems unlikely to be exactly true for finite ε, we verify it asymptotically up to the accuracy of the expansion. Numerical work suggests the expansion is accurate for ε as large as 0.05. The best ℓ1-estimation rule is first but not second order minimax. The results bear on an earlier study of maximum entropy estimation and various questions in robustness and function estimation using wavelet bases.
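The ℓ1-estimation rule mentioned above corresponds to coordinatewise soft thresholding. A minimal sketch of that estimator is below; the threshold choice √(2 log(1/ε)) and the signal level are illustrative assumptions, not the paper's exact minimax rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse mean: a fraction eps of the n coordinates are nonzero
# (mu = 4.0 is a hypothetical signal strength for illustration).
n, eps, mu = 10_000, 0.05, 4.0
theta = np.zeros(n)
theta[rng.random(n) < eps] = mu

# Observe the mean vector in Gaussian white noise.
y = theta + rng.standard_normal(n)

def soft_threshold(y, lam):
    """Soft thresholding: the estimator arising from l1-penalized least squares."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# Threshold sqrt(2 log(1/eps)): an assumed, commonly used sparsity-adapted choice.
lam = np.sqrt(2 * np.log(1 / eps))
theta_hat = soft_threshold(y, lam)

# Compare mean squared error against the raw observations y.
risk_soft = np.mean((theta_hat - theta) ** 2)
risk_mle = np.mean((y - theta) ** 2)
print(risk_soft < risk_mle)
```

With 95% of coordinates exactly zero, thresholding kills most pure-noise coordinates and achieves a much smaller risk than the unbiased estimate y itself.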


Similar resources

Truncated Linear Minimax Estimator of a Power of the Scale Parameter in a Lower-Bounded Parameter Space

Minimax estimation problems with restricted parameter space have received increasing interest within the last two decades. Some authors derived minimax and admissible estimators of bounded parameters under squared error loss and scale invariant squared error loss. In some truncated estimation problems the most natural estimator to be considered is the truncated version of a classic...

Full text

Mammalian Eye Gene Expression Using Support Vector Regression to Evaluate a Strategy for Detecting Human Eye Disease

Background and purpose: Machine learning is a class of modern and powerful tools that can solve many important problems humans face today. Support vector regression (SVR), a way to build a regression model, is a notable member of the machine learning family. SVR has been proven to be an effective tool in real-valued function estimation. As a supervised-learning appr...

Full text

Exact Minimax Estimation of the Predictive Density in Sparse Gaussian Models.

We consider estimating the predictive density under Kullback-Leibler loss in an ℓ0 sparse Gaussian sequence model. Explicit expressions of the first order minimax risk along with its exact constant, asymptotically least favorable priors and optimal predictive density estimates are derived. Compared to the sparse recovery results involving point estimation of the normal mean, new decision theore...

Full text

ℓp-norm based James-Stein estimation with minimaxity and sparsity

A new class of minimax Stein-type shrinkage estimators of a multivariate normal mean is studied, where the shrinkage factor is based on an ℓp norm. The proposed estimators allow some but not all coordinates to be estimated by 0, thereby allowing sparsity as well as minimaxity. AMS 2000 subject classifications: Primary 62C20; secondary 62J07.
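For comparison, the classical positive-part James-Stein estimator, the baseline such shrinkage rules generalize, can be sketched as follows. This is not the ℓp-based estimator of the cited paper, only the standard rule with a common data-driven shrinkage factor:

```python
import numpy as np

rng = np.random.default_rng(1)

def james_stein_plus(y):
    """Positive-part James-Stein: shrink all coordinates toward 0 by a
    common factor max(0, 1 - (n - 2) / ||y||^2)."""
    n = y.size
    shrink = max(0.0, 1.0 - (n - 2) / np.sum(y ** 2))
    return shrink * y

# True mean at the origin: the regime of maximal shrinkage gain.
n = 50
theta = np.zeros(n)
y = theta + rng.standard_normal(n)
theta_hat = james_stein_plus(y)

risk_js = np.mean((theta_hat - theta) ** 2)
risk_mle = np.mean((y - theta) ** 2)
print(risk_js < risk_mle)
```

Note that every coordinate is multiplied by the same factor, so no coordinate is set to 0 unless all are; allowing some but not all coordinates to vanish is precisely what the ℓp-based construction adds.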

Full text

Partially Linear Bayesian Estimation with Application to Sparse Approximations

We address the problem of estimating a random vector X from two sets of measurements Y and Z , such that the estimator is linear in Y . We show that the partially linear minimum mean squared error (PLMMSE) estimator does not require knowing the joint distribution of X and Y in full, but rather only its second-order moments. This renders it of potential interest in various applications. We furth...

Full text


Journal:

Volume   Issue

Pages  -

Publication date: 1991